Add JMH benchmark harness #2778
Conversation
Code Review
This pull request integrates JMH benchmarking into the jme3-core module, establishing the build infrastructure and adding benchmarks for bounding volumes, light lists, and geometry lists. Feedback suggests expanding the benchmark source set's classpath to include main runtime and test output dependencies to prevent potential compilation or runtime errors. Additionally, a more robust modulo-based offset calculation was recommended for the light list sorting benchmark to support arbitrary light counts.
compileClasspath += sourceSets.main.output
runtimeClasspath += output + compileClasspath
The benchmark source set configuration is currently missing the dependencies of the main source set and the output of the test source set. This will likely cause compilation or runtime errors if the benchmarks depend on external libraries or on test utilities (like NullComparator). Using sourceSets.main.runtimeClasspath and sourceSets.test.output ensures a complete classpath for the benchmarks:
compileClasspath += sourceSets.main.runtimeClasspath + sourceSets.test.output
runtimeClasspath += sourceSets.main.runtimeClasspath + sourceSets.test.output
@Setup(Level.Invocation)
public void setupInvocation() {
    list.clear();
    int offset = invocation++ & (lightCount - 1);
Using a bitwise AND with lightCount - 1 to calculate the offset only works correctly when lightCount is a power of two. While the current parameters (8, 64, 256) are powers of two, this approach is fragile and will break if non-power-of-two sizes are added in the future. Using the modulo operator is more robust.
- int offset = invocation++ & (lightCount - 1);
+ int offset = (invocation++ & Integer.MAX_VALUE) % lightCount;
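The difference between the two forms is easy to demonstrate in plain Java. The sketch below is standalone (the class and method names are hypothetical, not part of the PR); it only isolates the two offset calculations from the review comment:

```java
// Standalone sketch (hypothetical names, not from the PR) comparing the
// two offset calculations discussed in the review comment above.
public class OffsetDemo {

    // Power-of-two trick: cycles through 0..count-1 only when count
    // is a power of two, because (count - 1) is then an all-ones mask.
    static int maskOffset(int invocation, int count) {
        return invocation & (count - 1);
    }

    // Robust form: clearing the sign bit keeps the modulo non-negative
    // even after the invocation counter overflows past Integer.MAX_VALUE.
    static int moduloOffset(int invocation, int count) {
        return (invocation & Integer.MAX_VALUE) % count;
    }
}
```

For a power-of-two count such as 8, both forms agree (invocation 9 yields offset 1). For a non-power-of-two count such as 6, `maskOffset(6, 6)` computes `0b110 & 0b101 = 4` instead of wrapping to 0, while `moduloOffset(6, 6)` correctly returns 0; the sign-bit mask also keeps the result non-negative once the counter overflows.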
Force-pushed 85d9391 to cc0750d
🖼️ Screenshot tests have failed. The purpose of these tests is to ensure that changes introduced in this PR don't break visual features. They are visual unit tests.

📄 Where to find the report:
✅ If you did mean to change things:
✨ If you are creating entirely new tests:

Note: it is very important that the committed reference images are created on the build pipeline; locally created images are not reliable. Similarly, tests will fail locally, but you can look at the report to check they are "visually similar". See https://github.com/jMonkeyEngine/jmonkeyengine/blob/master/jme3-screenshot-tests/README.md for more information.

Contact @richardTingle (aka richtea) for guidance if required.
A JMH benchmark system could be pretty helpful when trying to optimize the engine. I'm adding a few sample benchmarks here.
Right now, nothing is set up to run in CI, but we could do that at some point. It might be a bit flaky because the CI machine and its load can vary... so I'm not sure we want to try that.
Test with:
./gradlew :jme3-core:jmh